This notebook contains an algorithm designed to profit off of the correlation between Apple's and Google's common stock in December 2012 to May 2012. It explores the process of developing the algorithm from conception to full maturation. The final algorithm returns 14.1% in only 128 trading days after commission fees and accounting for slippage.
This notebook and the data used in this example is located at http://github.com/agconti/AGCTrading.
In [12]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from zipline.algorithm import TradingAlgorithm
from zipline.transforms import MovingAverage, batch_transform
from zipline.utils.factory import load_from_yahoo
from zipline.finance import slippage, commission
In [13]:
# Uncomment the line below to get the data for yourself, but it is already included in the above mentioned repository
#data = load_from_yahoo(stocks=['AAPL', 'GOOG'],start=datetime.datetime(2012,11,1), end=datetime.datetime(2013,5,8)); data.save('goog_aapl_tests.dat')
data = pd.load('goog_aapl_tests.dat')
In [14]:
fig = plt.figure(figsize(12,6))
#data.SPX.plot(label='S&P Benchmark')
data.GOOG.plot()
data.AAPL.plot()
legend(loc='best')
plt.title("Apple & Google Adj Closing Prices")
Out[14]:
Here is the most basic version of our algorithm and will be refined later in the notebook. It works by comparing the prices between Google and Apple,( self.pricea
for Apple and self.priceg
for Google), and then buys or sells the stock if the correlation, corr
, of the stocks in the last 5 days is greater than our set minimun correlation value, self.corr_tolerance
. To determine whether to buy or sell Google, it checks the movement of Apple's stock. If Apple went down yesterday relative to today, it buys Google; if apple's stock went up it sells Google.
Here's a look for the coded logic for buying and selling Google:
if self.priceg < self.priceglag and np.abs(corr) > self.corr_tolerance:
self.order('AAPL',-100)
self.sell_orders.append((data.AAPL.datetime, pval))
print "{dt}: Selling 100 AAPL shares.".format(dt = data.AAPL.datetime)
elif self.priceg > self.priceglag and np.abs(corr) > self.corr_tolerance:
self.order('AAPL',100)
self.buy_orders.append((data.AAPL.datetime, pval))
print "{dt}: Buying 100 AAPL shares.".format(dt = data.AAPL.datetime)
The algorithm use the same logic to trade Apple's stock. In this way, the pricing algorithm makes sure that the two stocks have indeed been following each other and then trades each stock by the other's divergence. The full algorithm is shown below:
In [15]:
class GOOG_AAPL_COR(TradingAlgorithm): # inherit from TradingAlgorithm
"""
Google Apple Correlation algorithm.
An algorithm designed to exploit the correlation in the movement of Apple's and
Google's common stock.
"""
def initialize(self):
self.capital_base = 1000000
self.pricea = 0
self.priceg = 0
self.window_length = 5
self.corr_tolerance = 0.9
self.max_notional = self.capital_base + 0.1
self.min_notional = self.capital_base * (-1)
self.pricealag = 0
self.priceglag = 0
self.buy_orders = []
self.sell_orders = []
self.corr_values_aapl = []
self.corr_values_goog = []
def handle_data(self, data):
self.pricea = data.AAPL.price
self.priceg = data.GOOG.price
# compute correlation between stocks
corr, pval = self.correlation(data, self.pricea, self.priceg)
print str(data.AAPL.datetime) + ': Pearson Correlation: ' + str(corr) # print results to console
#notional = self.portfolio.positions[data.AAPL].amount * pricea
#notional = notional + self.portfolio.positions[data.GOOG].amount * priceg
# Decision logic
if self.priceg < self.priceglag and np.abs(corr) > self.corr_tolerance:
self.order('AAPL',-100)
self.sell_orders.append((data.AAPL.datetime, pval)) # save the order pval and time for analysis later on
print "{dt}: Selling 100 AAPL shares.".format(dt = data.AAPL.datetime)
elif self.priceg > self.priceglag and np.abs(corr) > self.corr_tolerance:
self.order('AAPL',100)
self.buy_orders.append((data.AAPL.datetime, pval))
print "{dt}: Buying 100 AAPL shares.".format(dt = data.AAPL.datetime)
if self.pricea < self.pricealag and np.abs(corr) > self.corr_tolerance:
self.order('GOOG',-100)
self.sell_orders.append((data.GOOG.datetime, pval))
print "{dt}: Selling 100 GOOG shares.".format(dt = data.GOOG.datetime)
elif self.pricea > self.pricealag and np.abs(corr) > self.corr_tolerance:
self.order('GOOG',100)
self.buy_orders.append((data.GOOG.datetime, pval))
print "{dt}: Buying 100 GOOG shares.".format(dt = data.GOOG.datetime)
# lag price variables
self.pricealag = data.AAPL.price
self.priceglag = data.GOOG.price
# correlation method
def correlation(self, data, pricea, priceg):
self.corr_values_aapl.append(pricea)
self.corr_values_goog.append(priceg)
corr = pearsonr(self.corr_values_aapl[-self.window_length:], self.corr_values_goog[-self.window_length:])
return corr
In [16]:
GOOG_AAPL_COR = GOOG_AAPL_COR()
results_GOOG_AAPL_COR = GOOG_AAPL_COR.run(data)
The first chart is a plot of movement of Apple and Google's stock price during our simulation period. The second plots the date at which we bought and sold stock by the relative confidence of that trade. Our confidence is expressed as a p-value of the Pearson correlation coefficient. The lower the p-value, the more confident we are that the correlation is not random chance. The next two plots track our portfolio value and the returns of the portfolio throughout our simulation.
In [17]:
plt.figure(figsize(18,10), dpi=1600)
# plot 1
ax1 = plt.subplot(411)
data[['GOOG', 'AAPL']].plot(ax=ax1)
plt.ylabel('Price')
plt.setp(ax1.get_xticklabels(), visible=False) # hide plot 1 x axis labels
plt.title("GOOG_AAPL_COR Results")
# prepare data for plot 2
GOOG_AAPL_COR.buy_orders = np.asarray(GOOG_AAPL_COR.buy_orders)
GOOG_AAPL_COR.sell_orders = np.asarray(GOOG_AAPL_COR.sell_orders)
#plot 2
ax2 = plt.subplot(412, sharex=ax1)
plt.plot(GOOG_AAPL_COR.buy_orders[:,0], GOOG_AAPL_COR.buy_orders[:,1], color = 'g')
plt.plot(GOOG_AAPL_COR.sell_orders[:,0], GOOG_AAPL_COR.sell_orders[:,1], color = 'r')
plt.plot(GOOG_AAPL_COR.buy_orders[:,0], GOOG_AAPL_COR.buy_orders[:,1], '^', c='g', markersize=10, label='buy')
plt.plot(GOOG_AAPL_COR.sell_orders[:,0], GOOG_AAPL_COR.sell_orders[:,1], 'v', c='r', markersize=10, label='sell')
plt.ylabel('P-Value')
plt.grid(color = 'k')
plt.setp(ax2.get_xticklabels(), visible=False)
plt.legend()
# plot 3
ax3 = plt.subplot(413, sharex=ax1)
results_GOOG_AAPL_COR.portfolio_value.plot()
plt.ylabel('Portfolio Value')
plt.setp(ax3.get_xticklabels(), visible=False)
plt.legend()
#plot 4
ax4 = plt.subplot(414, sharex=ax1)
results_GOOG_AAPL_COR.portfolio_value.pct_change().plot()
plt.ylabel('Portfolio % Change')
plt.legend(loc='best')
Out[17]:
In [18]:
results_GOOG_AAPL_COR.portfolio_value.describe() # show the summary statistics for our portfolio value
Out[18]:
In [19]:
results_GOOG_AAPL_COR.portfolio_value.pct_change().apply(lambda x: x * 100).describe() # show the summary statistics for our returns
Out[19]:
In [20]:
# Plot the distribution of the returns
fig = plt.figure(figsize(8,4))
results_GOOG_AAPL_COR.portfolio_value.pct_change().dropna().plot(kind='kde')
plt.xlabel=('Returns')
title("The Distribution of the Returns for\nGOOG_AAPL_COR")
Out[20]:
As you can see, the performance of the algorithm is poor. On average, it bled 0.03% per day. On its best day it gained just under a percent, (0.98%), and on its worst it lost 2.852%. Throughout its trading our algorithm experienced a protracted drawdown and finished below its high-water mark at $952,641.669, a 4.735% loss. The upside? There is tons of potential.
Lets clean the signal from the noise by adjusting Apple and Google's returns by the S&P returns. Through this process we are reducing the systematic noise in the movements of the two stock prices and getting closer to the actual signals of the two firm's movements. Below is a graphic representation of the process:
In [21]:
fig = plt.figure(figsize(18, 10), dpi=1600)
ax1 = fig.add_subplot(211)
data.SPX.plot(label='S&P Benchmark')
data.GOOG.plot()
data.AAPL.plot()
plt.ylabel(r'Stock / Index Value')
legend(loc='best')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.title("Adj Closing Prices")
ax2 = fig.add_subplot(212, sharex=ax1)
(data.SPX.pct_change() - data.SPX.pct_change()).plot(label='S&P Benchmark, flatlined')
(data.GOOG.pct_change() - data.SPX.pct_change()).plot(label='GOOG adj for var in S&P')
(data.AAPL.pct_change() - data.SPX.pct_change()).plot(label='AAPL adj for var in S&P')
plt.ylabel("Daily Return")
legend(loc='best')
plt.title("Adj Closing Prices adjusted for S&P movements")
Out[21]:
From now on we will use the day's percent change, or the day's return, instead of the adjusted closing price of the stock. Before we decide whether to buy or sell, we adjust the return by the return of the S&P in the same period. Then we will trade the stock if they are within our correlation tolerance. If the return is positive for the opposing partner we buy the stock; if the return is negative we sell the stock.
The resulting trading algorithm is shown below:
In [22]:
class GOOG_AAPL_COR_V_SP500(TradingAlgorithm): # inherit from TradingAlgorithm
"""
Google Apple Correlation algorithm adjusted for movements in the S&P500.
An algorithm designed to exploit the correlation in the movement of Apple's and
Google's common stock.
"""
def initialize(self):
self.capital_base = 1000000
self.pricea = 0
self.priceg = 0
self.pricespx = 0
self.window_length = 5
self.corr_tolerance = 0.9
self.max_notional = self.capital_base + 0.1
self.min_notional = self.capital_base * (-1)
self.pricealag = 0
self.priceglag = 0
self.pricespxlag = 0
self.buy_orders = []
self.sell_orders = []
self.corr_values_aapl = []
self.corr_values_goog = []
def handle_data(self, data):
self.pricea = data.AAPL.price
self.priceg = data.GOOG.price
self.pricespx = data.SPX.price
# find percent change and substract movements from SPX
aapl_mov, goog_mov = self.spx_adjust(data, self.pricea, self.priceg, self.pricespx, self.pricealag, self.priceglag, self.pricespxlag)
# calculate the correlation between the stocks
corr, pval = self.correlation(data, self.pricea, self.priceg)
print str(data.AAPL.datetime) + ': Pearson Correlation: ' + str(corr) # print results to console
#notional = self.portfolio.positions[data.AAPL].amount * pricea
#notional = notional + self.portfolio.positions[data.GOOG].amount * priceg
# Decision Logic
if goog_mov < 0 and np.abs(corr) > self.corr_tolerance:
self.order('AAPL',-100)
self.sell_orders.append((data.AAPL.datetime, pval)) # save the order amount and time for analysis
print "{dt}: Selling 100 AAPL shares.".format(dt = data.AAPL.datetime)
elif goog_mov > 0 and np.abs(corr) > self.corr_tolerance:
self.order('AAPL',100)
self.buy_orders.append((data.AAPL.datetime, pval))
print "{dt}: Buying 100 AAPL shares.".format(dt = data.AAPL.datetime)
if aapl_mov < 0 and np.abs(corr) > self.corr_tolerance:
self.order('GOOG',-100)
self.sell_orders.append((data.GOOG.datetime, pval))
print "{dt}: Selling 100 GOOG shares.".format(dt = data.GOOG.datetime)
elif aapl_mov > 0 and np.abs(corr) > self.corr_tolerance:
self.order('GOOG',100)
self.buy_orders.append((data.GOOG.datetime, pval))
print "{dt}: Buying 100 GOOG shares.".format(dt = data.GOOG.datetime)
# lag price variables
self.pricealag = data.AAPL.price
self.priceglag = data.GOOG.price
self.pricespxlag = data.SPX.price
def correlation(self, data, pricea, priceg):
self.corr_values_aapl.append(self.pricea)
self.corr_values_goog.append(self.priceg)
corr = pearsonr(self.corr_values_aapl[-self.window_length:], self.corr_values_goog[-self.window_length:]) # [0] only get the correltion
return corr
def per_change(self, data, begin, end):
percent_change = ((end - begin)/begin)
return percent_change
def spx_adjust(self, data, pricea, priceg, pricespx, pricealag, priceglag, pricespxlag):
spx_change = self.per_change(data, pricespx, self.pricespxlag)
aapl_change = (self.per_change(data, pricea, self.pricealag) - spx_change)
goog_change = (self.per_change(data, priceg, self.priceglag) - spx_change)
return (aapl_change, goog_change)
In [23]:
GOOG_AAPL_COR_V_SP500 = GOOG_AAPL_COR_V_SP500()
results_GOOG_AAPL_COR_V_SP500 = GOOG_AAPL_COR_V_SP500.run(data)
In [24]:
fig = plt.figure(figsize(18,12), dpi=1600)
ax1 = plt.subplot(411)
(data.SPX.pct_change() - data.SPX.pct_change()).plot()
(data.GOOG.pct_change() - data.SPX.pct_change()).plot(label='GOOG')
(data.AAPL.pct_change() - data.SPX.pct_change()).plot(label='AAPL')
plt.ylabel('Daily Return')
legend(loc='best')
plt.title("Adj Closing Prices adjusted for S&P movements")
plt.setp(ax2.get_xticklabels(), visible=False)
GOOG_AAPL_COR_V_SP500.buy_orders = np.asarray(GOOG_AAPL_COR_V_SP500.buy_orders)
GOOG_AAPL_COR_V_SP500.sell_orders = np.asarray(GOOG_AAPL_COR_V_SP500.sell_orders)
plt.title("GOOG_AAPL_COR_V_SP500 Results")
ax2 = plt.subplot(412, sharex=ax1)
plt.plot(GOOG_AAPL_COR_V_SP500.buy_orders[:,0], GOOG_AAPL_COR_V_SP500.buy_orders[:,1], color = 'r')
plt.plot(GOOG_AAPL_COR_V_SP500.sell_orders[:,0], GOOG_AAPL_COR_V_SP500.sell_orders[:,1], color = 'g')
plt.plot(GOOG_AAPL_COR_V_SP500.buy_orders[:,0], GOOG_AAPL_COR_V_SP500.buy_orders[:,1], '^', c='g', markersize=10, label='buy')
plt.plot(GOOG_AAPL_COR_V_SP500.sell_orders[:,0], GOOG_AAPL_COR_V_SP500.sell_orders[:,1], 'v', c='r', markersize=10, label='sell')
plt.ylabel('P-Value')
plt.grid(b=True)
plt.setp(ax1.get_xticklabels(), visible=False)
plt.legend()
ax3 = plt.subplot(413, sharex=ax1)
results_GOOG_AAPL_COR_V_SP500.portfolio_value.plot()
plt.setp(ax3.get_xticklabels(), visible=False)
plt.ylabel('Portfolio Value')
plt.legend(loc='best')
ax4 = plt.subplot(414, sharex=ax1)
results_GOOG_AAPL_COR_V_SP500.portfolio_value.pct_change().plot()
plt.ylabel('Portfolio % Change')
plt.legend()
Out[24]:
In [25]:
results_GOOG_AAPL_COR_V_SP500.portfolio_value.describe()
Out[25]:
In [26]:
results_GOOG_AAPL_COR_V_SP500.portfolio_value.pct_change().apply(lambda x: x * 100).describe()
Out[26]:
In [27]:
# Plot the distribution of the returns
fig = plt.figure(figsize(8,4))
results_GOOG_AAPL_COR_V_SP500.portfolio_value.pct_change().dropna().plot(kind='kde')
plt.xlabel=('Returns')
title("The Distribution of the Returns for\nGOOG_AAPL_COR_V_SP500")
Out[27]:
Our algorithm performed well. The portfolio ending value was approximately $1,088,569.669, thus our algorithm returned approximately 8.86% in just 128 days of trading. If we assume this performance for the rest of the year the algorithm would return 17.60%. Not too shabby. During our simulation the returns had a standard deviation of only 0.466%, meaning that it remained relatively low in its generation of alpha. Its greatest single day drawdown was only 1.068%, on its best day it returned 3.173%.
Before we were only allowing our algorithm to trade 100 shares of either stock. Now, let’s let it trade as many shares as it’s able.
The secret sauce is this version is this line of code for Google and Apple respectively:
self.trade_size_aapl = round(((self.capital_base * 0.75) / float(self.pricea)))
self.trade_size_goog = round(((self.capital_base * 0.75) / float(self.priceg)))
This code allows the algorithm to fully utilize its assets under management by dividing the total sum of money allocated to the stock, (0.75% of our captial base) by the stock’s price to yield the maximum number of shares it can order. We constrain the allocation to 75% of assets under management with, self.capital_base * 0.75
. Setting both stocks under this constraint allows it to be 150% long or short if two instances of Apple and Google are bought or sold, thus allowing the algorithm to trade on margin.
The algorithm is constrained to the standard 50% margin requirements by these additions to the decision logic:
and self.orders < (2) # for buy orders
and self.orders > (-2) # for sell orders
self.orders
's value is tracked by adding a counter for every buy order and removing one for every sell order. This constrains the algorithm to a maximum of 2 orders in any direction and keeps it with in its margin allowance.
Until now our algorithms have been missing two key features of accurately simulating the performance of a trading strategy; slippage and commission. Because we are now trading with relatively large order sizes compared to our initial orders of 100 shares, (our new algorithm trades with fluctuating order sizes of over 1100 shares), we have to model the effect these larger orders may have on the market. At a very high level, price is a function of the quantity supplied and the quantity demanded. If large enough {buy, sell} orders are placed, they will cause a {shortage, surplus} in the market for that security thus {raising, lowering} its price. The effect of shifting the market price of a stock with large orders is called slippage, and so is a necessary component in simulating realistic performance. All of the previous algorithms assume that we can buy and sell the stocks for free, but sadly this isn’t the case for most of us. To model this transaction cost we will assume a fee of $9.99 per trade, as this is the current price for a standard retail account for many of the popular online brokers at the time of writing. This commission fee is effectively a tax on our returns and so its effect needs to be modeled if we want realistic results. The code to incorporate these features into our algorithm is shown below.
# set slippage in our model, as a function of our trade size and the average trading
# volume at the time of the trade
self.set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
#set commission $9.99 per trade. Cost per trade at TD Ameritrade at time of writing.
self.set_commission(commission.PerTrade(cost=9.99))
In [28]:
class GOOG_AAPL_COR_V_SP500_Optimized(TradingAlgorithm): # inherit from TradingAlgorithm
"""
Google Apple Correlation algorithm adjusted for movements in the S&P500.
An algorithm designed to exploit the correlation in the movement of Apple's and
Google's common stock.
"""
# need to add notional tollerances
def initialize(self):
self.capital_base = 1000000
self.pricea = 0
self.priceg = 0
self.pricespx = 0
self.trade_size_aapl = 0
self.trade_size_goog = 0
self.window_length = 5
self.corr_tolerance = 0.9
self.max_notional = self.capital_base + 0.1
self.min_notional = self.capital_base * (-1)
self.pricealag = 0
self.priceglag = 0
self.pricespxlag = 0
self.buy_orders = []
self.sell_orders = []
self.corr_values_aapl = []
self.corr_values_goog = []
self.orders = 0
# set slippage in our model, as a function of our trade size and the average trading
# volume at the time of the trade
self.set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
#set commission $9.99 per trade. Cost per trade at TD Ameritrade at time of writing.
self.set_commission(commission.PerTrade(cost=9.99))
def handle_data(self, data):
self.pricea = data.AAPL.price
self.priceg = data.GOOG.price
self.pricespx = data.SPX.price
# calculate trade size
self.trade_size_aapl = round(((self.capital_base * 0.75) / float(self.pricea)))
self.trade_size_goog = round(((self.capital_base * 0.75) / float(self.priceg)))
# find percent change and substract movements from SPX
aapl_mov, goog_mov = self.spx_adjust(
data, self.pricea, self.priceg, self.pricespx,
self.pricealag, self.priceglag, self.pricespxlag
)
# calculate the correlation between the stocks
corr, pval = self.correlation(data, self.pricea, self.priceg)
print str(data.AAPL.datetime) + ': Pearson Correlation: ' + str(corr)
#notional = self.portfolio.positions[data.AAPL].amount * pricea
#notional = notional + self.portfolio.positions[data.GOOG].amount * priceg
# Decision Logic
if goog_mov < 0 and np.abs(corr) > self.corr_tolerance and self.orders > (-2):
self.order('AAPL',-(self.trade_size_aapl))
self.sell_orders.append((data.AAPL.datetime, pval))
self.orders -= 1
print "{dt}: Selling {ts} AAPL shares.".format(dt = data.AAPL.datetime, ts = self.trade_size_aapl)
elif goog_mov > 0 and np.abs(corr) > self.corr_tolerance and self.orders < (2):
self.order('AAPL',(self.trade_size_aapl))
self.buy_orders.append((data.AAPL.datetime, pval))
self.orders += 1
print "{dt}: Buying {ts} AAPL shares.".format(dt = data.AAPL.datetime, ts = self.trade_size_aapl)
if aapl_mov < 0 and np.abs(corr) > self.corr_tolerance and np.abs(self.orders) > (-2):
self.order('GOOG',-(self.trade_size_goog))
self.sell_orders.append((data.GOOG.datetime, pval))
self.orders -= 1
print "{dt}: Selling {ts} GOOG shares.".format(dt = data.GOOG.datetime, ts = self.trade_size_goog)
elif aapl_mov > 0 and np.abs(corr) > self.corr_tolerance and np.abs(self.orders) < (2):
self.order('GOOG', self.trade_size_goog)
self.buy_orders.append((data.GOOG.datetime, pval))
self.orders += 1
print "{dt}: Buying {ts} GOOG shares.".format(dt = data.GOOG.datetime, ts = self.trade_size_goog)
# lag price variables
self.pricealag = data.AAPL.price
self.priceglag = data.GOOG.price
self.pricespxlag = data.SPX.price
def correlation(self, data, pricea, priceg):
self.corr_values_aapl.append(pricea)
self.corr_values_goog.append(priceg)
corr = pearsonr(self.corr_values_aapl[-self.window_length:], self.corr_values_goog[-self.window_length:])
return corr
def per_change(self, data, begin, end):
percent_change = ((end - begin)/begin)
return percent_change
def spx_adjust(self, data, pricea, priceg, pricespx, pricealag, priceglag, pricespxlag):
spx_change = self.per_change(data, pricespx, self.pricespxlag)
aapl_change = (self.per_change(data, pricea, self.pricealag) - spx_change)
goog_change = (self.per_change(data, priceg, self.priceglag) - spx_change)
return (aapl_change, goog_change)
In [29]:
GOOG_AAPL_COR_V_SP500_Optimized = GOOG_AAPL_COR_V_SP500_Optimized()
results_GOOG_AAPL_COR_V_SP500_Optimized = GOOG_AAPL_COR_V_SP500_Optimized.run(data)
In [30]:
fig = plt.figure(figsize(18,12), dpi=1600)
plt.title("GOOG_AAPL_COR_V_SP500_Optimized Results")
# plot 1
ax1 = plt.subplot(411)
(data.SPX.pct_change() - data.SPX.pct_change()).plot()
(data.GOOG.pct_change() - data.SPX.pct_change()).plot(label='GOOG')
(data.AAPL.pct_change() - data.SPX.pct_change()).plot(label='AAPL')
plt.ylabel("Daily Return")
legend(loc='best')
plt.setp(ax2.get_xticklabels(), visible=False)
# prepare data for plot 2
GOOG_AAPL_COR_V_SP500_Optimized.buy_orders = np.asarray(GOOG_AAPL_COR_V_SP500_Optimized.buy_orders)
GOOG_AAPL_COR_V_SP500_Optimized.sell_orders = np.asarray(GOOG_AAPL_COR_V_SP500_Optimized.sell_orders)
# plot 2
ax2 = plt.subplot(412, sharex=ax1)
plt.plot(GOOG_AAPL_COR_V_SP500_Optimized.buy_orders[:,0], GOOG_AAPL_COR_V_SP500_Optimized.buy_orders[:,1], color = 'g')
plt.plot(GOOG_AAPL_COR_V_SP500_Optimized.sell_orders[:,0], GOOG_AAPL_COR_V_SP500_Optimized.sell_orders[:,1], color = 'r')
plt.plot(GOOG_AAPL_COR_V_SP500_Optimized.buy_orders[:,0], GOOG_AAPL_COR_V_SP500_Optimized.buy_orders[:,1], '^', c='g', markersize=10, label='buy')
plt.plot(GOOG_AAPL_COR_V_SP500_Optimized.sell_orders[:,0], GOOG_AAPL_COR_V_SP500_Optimized.sell_orders[:,1], 'v', c='r', markersize=10, label='sell')
plt.ylabel('P-Value')
plt.grid(b=True)
plt.setp(ax1.get_xticklabels(), visible=False)
plt.legend()
# plot 3
ax3 = plt.subplot(413, sharex=ax1)
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.plot()
plt.setp(ax3.get_xticklabels(), visible=False)
plt.ylabel('Portfolio Value')
plt.legend(loc='best')
# plot 4
ax4 = plt.subplot(414, sharex=ax1)
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.pct_change().plot()
plt.ylabel('Portfolio % Change')
plt.legend()
Out[30]:
In [31]:
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.describe()
Out[31]:
In [32]:
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.pct_change().apply(lambda x: x * 100).describe()
Out[32]:
In [33]:
# Plot the distribution of the returns
fig = plt.figure(figsize(8,4))
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.pct_change().dropna().plot(kind='kde')
plt.xlabel=('Returns')
title("The Distribution of the Returns for\nGOOG_AAPL_COR_V_SP500_Optimized")
Out[33]:
Our algorithm performed well. It finished the 128 days of trading with a portfolio value of $1,140,775.184 million dollars, a return of 14.10%. On its single greatest day it retuned 8.133% and on its worst; it had a drawdown of 2.95%. On average it returned 0.109% a day, just about a 10th of a percent. The standard deviation of our returns was a mere 1.04%; meaning that despite the additional leverage, and so increased volatility, it remained a relatively low volatility generator of alpha. If this performance is assumed throughout the rest of the year, the algorithm would return would be 28.20%. This would greatly outpace the market and would have a superior risk profile, with regards to volatility, for the average investor
In [34]:
fig = plt.figure(figsize(18,8), dpi=1600)
ax1 = plt.subplot(211)
results_GOOG_AAPL_COR.portfolio_value.plot(label='GOOG_AAPL_COR')
results_GOOG_AAPL_COR_V_SP500.portfolio_value.plot(label='GOOG_AAPL_COR_V_SP500')
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.plot(label='GOOG_AAPL_COR_V_SP500_Optimized')
plt.setp(ax3.get_xticklabels(), visible=False)
plt.ylabel('Portfolio Value')
plt.legend(loc='best')
plt.title('Aggreated Performance')
ax1 = plt.subplot(212, sharex=ax1)
results_GOOG_AAPL_COR.portfolio_value.pct_change().plot(label='GOOG_AAPL_COR')
results_GOOG_AAPL_COR_V_SP500.portfolio_value.pct_change().plot(label='GOOG_AAPL_COR_V_SP500')
results_GOOG_AAPL_COR_V_SP500_Optimized.portfolio_value.pct_change().plot(label='GOOG_AAPL_COR_V_SP500_Optimized')
plt.ylabel('Portfolio % Change')
plt.legend(loc='best')
Out[34]:
In [37]:
!git add "GOOG V. AAPL Correlation Arb".ipynb
In [38]:
!git commit
In [36]: